5 research outputs found

    Robust manipulability-centric object detection in time-of-flight camera point clouds

    This paper presents a method for robustly identifying the manipulability of objects in a scene based on the capabilities of the manipulator. The method uses a directed histogram search of a time-of-flight-camera-generated 3D point cloud that exploits the logical connection between objects and their supporting surface to facilitate scene segmentation. Once the scene is segmented, the points above the supporting surface are searched, again with a directed histogram, and potentially manipulatable objects are identified. Finally, the manipulatable objects in the scene are identified as those objects in the potential set that are within the manipulator's capabilities. It is shown empirically that the method robustly detects the supporting surface with ±15 mm accuracy and successfully discriminates between graspable and non-graspable objects in cluttered and complex scenes.
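    The supporting-surface step described in the abstract can be sketched as a height-histogram search over the point cloud. This is a minimal illustration, not the authors' implementation: the bin width, the synthetic scene, and the function names are assumptions, and only the ±15 mm tolerance comes from the abstract.

    ```python
    import numpy as np

    def find_supporting_surface(points, bin_width=0.005):
        """Estimate the height of the dominant horizontal surface in an
        N x 3 point cloud by histogramming point heights (z) and taking
        the centre of the most populated bin."""
        z = points[:, 2]
        bins = np.arange(z.min(), z.max() + bin_width, bin_width)
        counts, edges = np.histogram(z, bins=bins)
        peak = np.argmax(counts)
        return 0.5 * (edges[peak] + edges[peak + 1])

    def points_above_surface(points, surface_z, tol=0.015):
        """Return candidate object points lying above the surface;
        tol mirrors the +/-15 mm surface accuracy quoted above."""
        return points[points[:, 2] > surface_z + tol]

    # Synthetic scene: a flat table at z = 0.70 m plus one object on top.
    rng = np.random.default_rng(0)
    table = np.column_stack([rng.uniform(0.0, 1.0, 5000),
                             rng.uniform(0.0, 1.0, 5000),
                             rng.normal(0.70, 0.002, 5000)])
    obj = np.column_stack([rng.uniform(0.4, 0.5, 300),
                           rng.uniform(0.4, 0.5, 300),
                           rng.uniform(0.72, 0.80, 300)])
    cloud = np.vstack([table, obj])

    surface_z = find_supporting_surface(cloud)
    candidates = points_above_surface(cloud, surface_z)
    ```

    In this toy scene the histogram peak falls at the table height, and only the object's points survive the tolerance cut; a real pipeline would then cluster `candidates` into individual objects before checking them against the manipulator's capabilities.
    
    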

    RobotAssist - A platform for human robot interaction research

    This paper presents RobotAssist, a robotic platform designed for use in human-robot interaction research and for entry into the RoboCup@Home competition. The core autonomy of the system is implemented as a component-based software framework that allows for the integration of operating-system-independent components, is designed to be expandable, and integrates several layers of reasoning. The approaches taken to develop the core capabilities of the platform are described, namely: path planning in a social context, Simultaneous Localisation and Mapping (SLAM), human cue sensing and perception, and manipulatable object detection and manipulation.

    Influence of robot-issued joint attention cues on gaze and preference

    If inadvertently perceived as Joint Attention, a robot's incidental behaviors could potentially influence the preferences of observing humans. A study was conducted with 16 robot-naïve participants to explore the influence of robot-issued Joint Attention cues during decision-making. The results suggest that Joint Attention is transferable to HRI and can influence the process and outcome of human decision-making. © 2013 IEEE

    Head pose behavior in the Human-Robot Interaction space

    Visual Focus of Attention is an important mechanism for supporting successful interactions. In order to communicate effectively and intentionally (for example, issuing cues when a person is paying attention), a robot must have an understanding of Visual Focus of Attention behavior in the Human-Robot Interaction space. A real-world interaction study was conducted with 24 unsolicited participants to explore attention behavior towards robots in this space. The results suggest that there is no generalizable attention pattern between people, and thus that online, in-situ Visual Focus of Attention estimation would be advantageous to Human-Robot Interaction.